A computational model of multi-modal grounding for human-robot interaction
Authors
Abstract
Dialog systems for mobile robots operating in the real world should support a mixed-initiative dialog style, handle the multi-modal information involved in communication, and remain relatively independent of domain knowledge. Most dialog systems developed for mobile robots today, however, are system-oriented and have limited capabilities. We present an agent-based dialog model that is specially designed for human-robot interaction, and we provide evidence for its efficiency with our implemented system.
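One core piece of multi-modal grounding in such a dialog system is fusing speech with a concurrent gesture. The sketch below is purely illustrative (the function, the deictic word list, and the target name `table_2` are hypothetical, not taken from the paper's model): a deictic word in the utterance is resolved against the referent of a pointing gesture.

```python
# Minimal sketch of speech-gesture fusion for grounding (hypothetical;
# the paper's agent-based dialog model is far richer than this).

def ground_utterance(words, gesture_target):
    """Replace deictic words with the referent of a pointing gesture."""
    deictics = {"there", "this", "that", "it"}
    grounded = []
    for w in words:
        if w in deictics and gesture_target is not None:
            grounded.append(gesture_target)  # fuse speech with gesture
        else:
            grounded.append(w)
    return grounded

print(ground_utterance(["go", "there"], gesture_target="table_2"))
# → ['go', 'table_2']
```

A real system would also have to handle the case where several gestures or objects compete as referents, which is where a grounding model beyond simple substitution is needed.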
Similar Articles
Why and How to Model Multi-Modal Interaction for a Mobile Robot Companion
Verbal and non-verbal interaction capabilities for robots are often studied in isolation from each other in current research, because they largely contribute to different aspects of interaction. For a robot companion that needs to be both useful and social, however, these capabilities have to be considered in a unified, complex interaction context. In this paper we present two case studies in ...
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
Grounded language learning bridges words like ‘red’ and ‘square’ with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this paper, we build perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors to go beyond vision. Our system learns to ground natural language words describing obje...
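The idea of grounding a word in features from several modalities can be sketched very simply. The snippet below is a toy illustration under stated assumptions, not the paper's method (which learns per-word classifiers from exploratory behaviors): each object carries a concatenated feature vector of made-up visual, haptic, and auditory values, a word is grounded as the centroid of the objects it was used for, and a novel reference is resolved by cosine similarity.

```python
# Toy sketch of multi-modal word grounding. All object names and
# feature values are hypothetical; features are concatenated as
# [visual..., haptic..., auditory...].
import math

def centroid(vectors):
    """Mean feature vector over a word's positive example objects."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

objects = {
    "red_ball":  [0.9, 0.1, 0.3, 0.2],
    "blue_cube": [0.1, 0.8, 0.7, 0.1],
    "red_cube":  [0.9, 0.2, 0.7, 0.1],
}

# A word's grounding: the centroid of objects it has described.
groundings = {
    "red":  centroid([objects["red_ball"], objects["red_cube"]]),
    "cube": centroid([objects["blue_cube"], objects["red_cube"]]),
}

def best_match(word, candidates):
    """Pick the candidate whose features best fit the word's grounding."""
    g = groundings[word]
    return max(candidates, key=lambda name: cosine(objects[name], g))

print(best_match("red", ["red_ball", "blue_cube"]))  # → red_ball
```

The point of the multi-modal setting is that words like "soft" or "rattling" would be separable in the haptic and auditory dimensions even when the visual dimensions are identical.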
Grounded spoken language acquisition: experiments in word learning
Language is grounded in sensory-motor experience. Grounding connects concepts to the physical world, enabling humans to acquire and use words and sentences in context. Currently, most machines that process language are not grounded. Instead, semantic representations are abstract, pre-specified, and have meaning only when interpreted by humans. We are interested in developing computational syste...
Opportunities and Obligations to Take Turns in Collaborative Multi-Party Human-Robot Interaction
In this paper we present a data-driven model for detecting opportunities and obligations for a robot to take turns in multi-party discussions about objects. The data used for the model was collected in a public setting, where the robot head Furhat played a collaborative card sorting game together with two users. The model makes a combined detection of addressee and turn-yielding cues, using mul...
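A combined detection of addressee and turn-yielding cues can be pictured as scoring multi-modal evidence against a threshold. The sketch below is a hand-written stand-in (the cue names, weights, and threshold are invented); the cited model is data-driven, so in practice these weights would be learned from the recorded interactions rather than set by hand.

```python
# Hedged sketch of multi-modal turn-taking detection. Cue names,
# weights, and threshold are hypothetical, not from the cited model.
CUE_WEIGHTS = {
    "gaze_at_robot": 0.5,  # addressee cue: speaker looks at the robot
    "falling_pitch": 0.3,  # turn-yielding prosodic cue
    "long_pause":    0.2,  # turn-yielding silence cue
}
THRESHOLD = 0.6

def should_take_turn(cues):
    """cues: dict mapping cue name to a detector score in [0, 1]."""
    score = sum(CUE_WEIGHTS[name] * cues.get(name, 0.0)
                for name in CUE_WEIGHTS)
    return score >= THRESHOLD

# Speaker gazes at the robot with falling pitch: take the turn.
print(should_take_turn(
    {"gaze_at_robot": 0.9, "falling_pitch": 0.8, "long_pause": 0.5}))
# → True
```

Combining the addressee cue with the turn-yielding cues matters in the multi-party setting: a pause with falling pitch while gazing at the *other* user is an opportunity for that user, not an obligation for the robot.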
Crossmodal Language Grounding, Learning, and Teaching
The human brain, one of the most complex dynamic systems, enables us to communicate and externalise information through natural language. Despite extensive research, human-like communication with interactive robots is not yet possible, because we have not yet fully understood the mechanistic characteristics of the crossmodal binding between language, actions, and visual sensation that enable humans...